16 research outputs found

    Software Engineering Research/Developer Collaborations in 2004 (C104)

    In 2004, six collaborations between software engineering technology providers and NASA software development personnel deployed a total of five software engineering technologies (for references, see Section 7.2) on NASA projects. The main purposes were to benefit the projects, to infuse the technologies into NASA practice where beneficial, and to give the technology providers feedback for improving the technologies. Each collaboration produced a final report (for references, see Section 7.1). Section 2 of this report summarizes each project, drawing on the final reports and on communications with the software developers and technology providers. Section 3 indicates paths to further infusion of the technologies into NASA practice. Section 4 summarizes some technology transfer lessons learned. Section 6 lists the acronyms used in this report.

    Bridging the Gap Between Requirements and Simulink Model Analysis

    Formal verification and simulation are powerful tools for verifying requirements against complex systems. Requirements are developed in early stages of the software lifecycle and are typically expressed in natural language. There is a gap between such requirements and their software implementations. We present a framework that bridges this gap by supporting a tight integration and feedback loop between high-level requirements and their analysis against software artifacts. Our framework implements an analysis portal within the fret requirements elicitation tool, thus forming an end-to-end, open-source environment where requirements are written in an intuitive, structured natural language and are verified automatically against Simulink models.
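
    To make the idea concrete, the sketch below (not taken from the paper; the mode, component, and signal names are invented) shows the general shape of such a structured natural-language requirement and one plausible temporal-logic reading of it, in a minimal Python form.

        # A hypothetical structured natural-language requirement, written in the
        # condition / component / "shall" / timing / response shape that structured
        # requirement languages use (all names invented for illustration).
        requirement_text = ("In descent_mode, the guidance_component shall always "
                            "satisfy altitude_rate >= -2.5")

        # One plausible temporal-logic reading: whenever descent_mode holds,
        # the response must hold at that step.
        formula = "G (descent_mode -> (altitude_rate >= -2.5))"

        # Checking the requirement then amounts to verifying this formula against
        # all behaviors of the Simulink model, or monitoring it on simulation traces.
        print(requirement_text)
        print(formula)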

    Simple Sensitivity Analysis for Orion GNC

    The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool (Critical Factors Tool, or CFT) developed to find the input variables, or pairs of variables, which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can indicate where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimates of success probability, and a technique for determining whether pairs of factors interact dependently or independently. The tool found that input variables such as moments, mass, thrust dispersions, and date of launch were significant factors for success of various requirements. Examples are shown in this paper, as well as a summary and physics discussion of the EFT-1 driving factors that the tool found.
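
    As a rough illustration (not from the paper) of one way a per-variable sensitivity measure based on estimated success probability could work, the Python sketch below bins one dispersed input into quantiles, estimates the conditional probability of meeting a requirement in each bin, and uses the spread across bins as a sensitivity score; all variable names, distributions, and thresholds are hypothetical.

        import numpy as np

        def success_probability_by_bin(x, success, n_bins=10):
            """Estimate P(success) within quantile bins of one dispersed input.
            x       : (n_runs,) values of a single Monte Carlo input variable
            success : (n_runs,) booleans, True if the run met the requirement
            """
            edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1))
            idx = np.clip(np.digitize(x, edges[1:-1]), 0, n_bins - 1)
            return np.array([success[idx == b].mean() for b in range(n_bins)])

        def sensitivity_score(x, success, n_bins=10):
            """Spread of conditional success probability across bins; a large
            spread suggests the variable drives requirement satisfaction."""
            probs = success_probability_by_bin(x, success, n_bins)
            return probs.max() - probs.min()

        # Hypothetical usage: rank dispersed inputs by influence on meeting a
        # touchdown-distance requirement.
        rng = np.random.default_rng(0)
        inputs = {"launch_day": rng.uniform(0, 365, 5000),
                  "dry_mass_kg": rng.normal(9300, 150, 5000),
                  "thrust_disp": rng.normal(1.0, 0.02, 5000)}
        touchdown_km = (0.01 * inputs["launch_day"]
                        + 50 * np.abs(inputs["thrust_disp"] - 1.0)
                        + rng.normal(0, 1, 5000))
        met_requirement = touchdown_km < 5.0   # e.g., land within 5 km of target
        ranking = sorted(inputs, reverse=True,
                         key=lambda k: sensitivity_score(inputs[k], met_requirement))
        print(ranking)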

    Bridging the Gap Between Requirements and Model Analysis: Evaluation on Ten Cyber-Physical Challenge Problems

    Formal verification and simulation are powerful tools to validate requirements against complex systems. [Problem] Requirements are developed in early stages of the software lifecycle and are typically written in ambiguous natural language. There is a gap between such requirements and the formal notations that can be used by verification tools, and there is a lack of support for properly associating requirements with the software artifacts to be verified. [Principal idea] We propose to write requirements in an intuitive, structured natural language with formal semantics, and to support formalization and model/code verification as a smooth, well-integrated process. [Contribution] We have developed an end-to-end, open-source requirements analysis framework that checks Simulink models against requirements written in structured natural language. Our framework is built in the Formal Requirements Elicitation Tool (fret); we use fret's requirements language, fretish, and the formalization of fretish requirements in temporal logics. Our framework contributes the following features: 1) automatic extraction of Simulink model information and association of fretish requirements with target model signals and components; 2) translation of temporal logic formulas into synchronous dataflow cocospec specifications as well as Simulink monitors, to be used by verification tools; we establish the correctness of our translation through extensive automated testing; 3) interpretation of counterexamples produced by verification tools back at the requirements level. These features support a tight integration and feedback loop between high-level requirements and their analysis. We demonstrate our approach on a major case study: the ten Lockheed Martin aerospace-inspired cyber-physical challenge problems.
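
    As a rough illustration of what a generated synchronous monitor does (a minimal sketch, not the actual cocospec or Simulink output of the framework), the Python below checks the invented requirement G (descent_mode -> altitude_rate >= -2.5) on a simulation trace one step at a time and reports the first violating step, the kind of information that can be mapped back to the requirement when a verification tool returns a counterexample.

        from typing import Iterable, Optional

        def monitor(trace: Iterable[dict]) -> Optional[int]:
            """Synchronous monitor for the invented requirement
            G (descent_mode -> altitude_rate >= -2.5).
            Returns the index of the first violating step, or None."""
            for step, signals in enumerate(trace):
                if signals["descent_mode"] and signals["altitude_rate"] < -2.5:
                    return step   # counterexample step, reported at requirement level
            return None

        # Hypothetical trace: the requirement is violated at step 2.
        trace = [
            {"descent_mode": False, "altitude_rate": -5.0},
            {"descent_mode": True,  "altitude_rate": -1.0},
            {"descent_mode": True,  "altitude_rate": -3.2},
        ]
        print(monitor(trace))   # -> 2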

    The NASA SARP Software Research Infusion Initiative

    A viewgraph presentation describing the NASA Software Assurance Research Program (SARP) research infusion projects is shown. The topics include: 1) Background/Motivation; 2) Proposal Solicitation Process; 3) Proposal Evaluation Process; 4) Overview of Some Projects to Date; and 5) Lessons Learned.

    AutoBayes Program Synthesis System Users Manual

    Program synthesis is the systematic, automatic construction of efficient executable code from high-level declarative specifications. AutoBayes is a fully automatic program synthesis system for the statistical data analysis domain; in particular, it solves parameter estimation problems. It has seen many successful applications at NASA and is currently being used, for example, to analyze simulation results for Orion. The input to AutoBayes is a concise description of a data analysis problem, composed of a parameterized statistical model and a goal that is a probability term involving parameters and input data. The output is optimized and fully documented C/C++ code computing the values for those parameters that maximize the probability term. AutoBayes can solve many subproblems symbolically rather than having to rely on numeric approximation algorithms, thus yielding effective, efficient, and compact code. Statistical analysis is faster and more reliable, because effort can be focused on model development and validation rather than manual development of solution algorithms and code.
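
    The sketch below is illustrative only: AutoBayes takes a declarative model specification, not Python, and emits documented C/C++. It shows the kind of parameter-estimation problem the system solves and the closed-form solution a symbolic derivation can produce instead of a numeric optimizer; the model and data here are invented.

        import numpy as np

        # Hypothetical model spec: x_i ~ Normal(mu, sigma^2), i = 1..n
        # Goal: values of (mu, sigma) that maximize P(x | mu, sigma)

        def estimate_gaussian(x: np.ndarray):
            """Closed-form maximum-likelihood estimates, as a symbolic derivation
            would produce: mu_hat = mean(x), sigma_hat = sqrt(mean((x - mu_hat)^2))."""
            mu_hat = x.mean()
            sigma_hat = np.sqrt(np.mean((x - mu_hat) ** 2))
            return mu_hat, sigma_hat

        rng = np.random.default_rng(1)
        data = rng.normal(loc=3.0, scale=0.5, size=10_000)  # stand-in simulation output
        print(estimate_gaussian(data))  # approximately (3.0, 0.5)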

    MARGInS: Model-Based Analysis of Realizable Goals in Systems

    Under NASA's Constellation effort, the Exploration Technology Development Program funded research toward a system validation capability that applied machine learning and test-case generation techniques to the analysis of black-box system behavior. The behavior analysis capability scaled to spaces of hundreds of input parameters and tens of thousands of test cases. Aerospace systems at the vehicle level, especially those that contain some level of autonomy, are best described by hybrid and non-linear mathematics. Even simplified models of such systems need hundreds or thousands of parameters to capture sufficient fidelity. The System Safety Assessments for these systems (such as those described in the SAE ARP 4761A Safety Assessment Process guidelines) are prone to error: interactions between the vehicle's subsystems are complex and can display emergent behaviors. NASA captured this new analysis in the Model-based Analysis of Realizable Goals in Systems (MARGInS) tool and applied it to the Pad Abort 1 (PA-1) simulation as part of the independent validation and verification cycle before the PA-1 flight test in May 2010. MARGInS evaluated the adherence of the high-fidelity simulation to its requirements and determined the margins to failure from the expected nominal input conditions. Following the PA-1 test, the capabilities within the MARGInS framework have been extended with sophisticated statistical and white-box test-case generation techniques and applied to other NASA missions. The framework now includes a critical factors analysis that was applied to NASA's Orion simulation and design. NASA's Aeronautics Research Mission Directorate (ARMD) leveraged the existing MARGInS framework for work on aviation safety for civil transport vehicles and for research on autonomy issues. The NASA ARMD effort created a time-series output prediction capability that has been used to characterize trajectories for an aircraft with an adaptive control system, and a safety boundary detection capability that has been applied to an air traffic control concept of operations for the Federal Aviation Administration. The statistical and machine-learning-based techniques within MARGInS have been successfully combined with concolic execution to improve the coverage of a critical unit by driving system-level inputs. The use case driving the concolic execution and MARGInS integration was inspired by the Air France 447 disaster, in which the loss of a critical functionality (the airspeed calculation from the pitot tubes) led to the loss of the aircraft and all aboard. To illustrate capabilities and limitations, we will highlight the analyses for the applications listed above. We will then discuss future plans for MARGInS and its interfaces with other tools.
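
    As a rough, invented illustration (not the actual MARGInS implementation) of analyzing black-box behavior with sampling and machine learning, the Python sketch below treats a simulation as a black box, labels each sampled run as pass or fail against a requirement, and fits a shallow decision tree to rank the input factors that drive the failure boundary; the simulator, inputs, and threshold are all made up.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        def black_box_sim(x):
            """Stand-in for a vehicle simulation: returns a failure metric."""
            wind, mass_offset, thrust_disp = x
            return 0.8 * abs(wind) + 2.0 * abs(thrust_disp) + 0.1 * abs(mass_offset)

        rng = np.random.default_rng(0)
        X = np.column_stack([rng.uniform(-20, 20, 4000),   # wind speed (m/s)
                             rng.normal(0, 50, 4000),      # mass offset (kg)
                             rng.normal(0, 3, 4000)])      # thrust dispersion (%)
        failed = np.array([black_box_sim(x) > 18.0 for x in X])  # requirement violated?

        # A shallow tree gives an interpretable approximation of the safety
        # boundary and ranks inputs by how strongly they drive failures.
        tree = DecisionTreeClassifier(max_depth=3).fit(X, failed)
        for name, importance in zip(["wind", "mass_offset", "thrust_disp"],
                                    tree.feature_importances_):
            print(f"{name}: {importance:.2f}")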